24th April 2025
Artificial Intelligence (AI) is no longer on the horizon of insurance innovation — it’s embedded in underwriting models, claims triage systems, and customer-facing tools. Yet, while the industry accelerates its adoption of AI-driven technologies, regulators and market participants alike are still wrestling with foundational questions: What exactly constitutes AI in insurance? Who should be held accountable when things go wrong? And how do we ensure appropriate oversight of such systems?
These were among the questions raised during a recent Lloyd’s market seminar, AI in the Insurance Industry: The Emerging Regulatory Landscape, hosted by litigation specialists Freshfields. The session underscored both the potential and the challenges that come with deploying AI in a heavily regulated, risk-based industry like insurance.
A shifting regulatory picture
Despite the growing presence of AI tools, both the UK government and Lloyd's itself currently impose only minimal formal regulation on their use within insurance. This regulatory lag raises practical challenges: How do insurers ensure they're using AI responsibly in the absence of defined guidelines? More pressingly, where does liability fall when AI-led decisions result in harm or loss?
One of the key takeaways from the Lloyd's session was the legal ambiguity between AI deployers (insurers, brokers, MGAs) and developers (technology vendors). Given AI systems' ability to infer, adapt, and operate autonomously, regulators may eventually require greater clarity on the chain of accountability, a distinction the EU's AI Act already begins to draw between providers and deployers of AI systems.
There's also a training and understanding gap beginning to surface. As the market embraces AI tools built by third-party providers or insurtech partners, operational teams, including those managing delegated authority frameworks, need to understand how these technologies function in practice. Are these tools being validated? Is there an audit trail for AI-driven decision-making? These questions are not just technical; they are governance-critical.
Implications for delegated authority management
Within the delegated authority ecosystem, AI brings both opportunities and new responsibilities. MGAs and TPAs increasingly rely on digital triage systems, smart rating engines, and fraud detection algorithms. As custodians of oversight frameworks, our role at Davies Insurance Services is to ensure that these innovations are integrated within a compliant and controlled environment.
This raises practical considerations: Should AI systems used in delegated schemes be subject to IT audits, much like policy administration platforms or bordereaux reporting systems? Should clients expect us to provide assurance around algorithmic decision-making? As the regulatory landscape matures, there may be scope for delegated authority managers to extend their services to cover AI governance — including audit, assurance, and compliance mapping.
Broader market trends
The wider Davies Group has been actively engaging with the evolution of AI. The group’s focus on digital transformation includes AI-led claims analytics and customer service automation in specific business lines. Davies has emphasised the need for “augmented intelligence” — combining AI with human oversight to enhance, not replace, decision-making. This hybrid approach is particularly relevant in insurance, where judgement and context remain critical.
More broadly, reports from the likes of PwC and Deloitte suggest that while the UK insurance sector is investing heavily in AI, regulatory uncertainty may be slowing innovation. As highlighted in Deloitte's recent UK Insurance Outlook, adopting AI without a unified regulatory framework risks leaving the UK behind more agile international markets. That sentiment was echoed during the Lloyd's seminar, where speakers noted that the EU and US are already shaping AI-specific laws.
Looking ahead
For those of us involved in managing delegated authority schemes, the key will be balancing innovation with oversight. The benefits of AI — faster claims resolution, improved fraud detection, better customer experiences — are well known. But they come with governance demands that we, as an industry, are only beginning to define.
As AI becomes more embedded in the insurance value chain, our challenge will be to ensure that these technologies operate transparently, ethically, and in compliance with evolving standards. At Davies, our goal is to stay ahead of this curve — not just as a service provider, but as a partner in shaping how insurance manages the risks and rewards of AI.
If you would like to continue the conversation, get in touch with Delegated Authority Manager Stuart Cheers-Berry at stuart.cheersberry@davies-group.com.
Article written by Stuart Cheers-Berry.